We employ proximal iteration for value-function optimization in reinforcement learning. Proximal iteration is a computationally efficient technique that enables us to bias the optimization procedure towards more desirable solutions. As a concrete application of proximal iteration in deep reinforcement learning, we endow the Deep Q-Network (DQN) agent with a proximal term in its objective function to ensure that the online-network component of DQN remains in the vicinity of the target network. The resultant agent, which we call DQN with proximal iteration, or DQN Pro, exhibits significant improvements over the original DQN on the Atari benchmark. Our results accentuate the power of employing sound optimization techniques in deep reinforcement learning.
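A minimal PyTorch-style sketch of the idea described in this abstract: a standard DQN TD loss plus a proximal penalty that keeps the online network's weights near the target network's. The names `online_net`, `target_net`, the batch layout, and the proximal weight `c` are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def dqn_pro_loss(online_net, target_net, batch, gamma=0.99, c=0.1):
    s, a, r, s_next, done = batch  # tensors sampled from a replay buffer

    # Standard DQN TD target computed with the (frozen) target network.
    with torch.no_grad():
        max_next_q = target_net(s_next).max(dim=1).values
        td_target = r + gamma * (1.0 - done) * max_next_q

    q = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    td_loss = F.smooth_l1_loss(q, td_target)

    # Proximal term: squared distance between online and target parameters,
    # biasing the online network to stay in the vicinity of the target network.
    prox = sum((p - p_t.detach()).pow(2).sum()
               for p, p_t in zip(online_net.parameters(), target_net.parameters()))

    return td_loss + 0.5 * c * prox
```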
We consider data tables for automated supervised learning systems that contain not only numeric and categorical columns but also one or more text fields. Here we assemble 18 multimodal data tables, each containing some text fields and stemming from a real business application. Our publicly available benchmark enables researchers to comprehensively evaluate their own supervised learning methods over numeric, categorical, and text features. To ensure that any single modeling strategy that performs well across all 18 datasets will serve as a practical foundation for multimodal text/tabular AutoML, the datasets in our benchmark vary in: sample size, problem type (a mix of classification and regression tasks), number of features (with the number of text columns ranging from 1 to 28 across datasets), and how the predictive signal is decomposed between text versus numeric/categorical features (and their predictive interactions). Over this benchmark, we evaluate a variety of straightforward pipelines to model such data, including standard two-stage approaches in which NLP is used to featurize the text, after which a tabular AutoML tool can be applied. Compared with human data science teams, the fully automated methodology that performed best on our benchmark (stack ensembling a multimodal Transformer with various tree models) also performs strongly when fit to the raw text/tabular data in two MachineHack prediction competitions, and places 2nd (out of 2380 teams) in Kaggle's Mercari Price Suggestion Challenge.
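A minimal sketch of the "standard two-stage" baseline the abstract mentions, using scikit-learn as a stand-in: text columns are featurized with TF-IDF and then combined with numeric/categorical columns in an ordinary tabular model. The column names and the choice of estimator are hypothetical placeholders, not part of the benchmark.

```python
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline

# Stage 1: featurize each modality separately.
preprocess = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=5000), "description"),    # a text field
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["category"]),  # categorical
    ("num", "passthrough", ["price", "quantity"]),                  # numeric
])

# Stage 2: feed the combined feature matrix to a tabular learner.
model = Pipeline([
    ("features", preprocess),
    ("regressor", GradientBoostingRegressor()),
])

# model.fit(train_df, train_df["target"]); model.predict(test_df)
```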
This open-source book represents our attempt to make deep learning approachable, teaching readers the concepts, the context, and the code. The entire book is drafted in Jupyter notebooks, seamlessly integrating exposition figures, math, and interactive examples with self-contained code. Our goal is to offer a resource that could (i) be freely available to everyone; (ii) offer sufficient technical depth to provide a starting point on the path to actually becoming an applied machine learning scientist; (iii) include runnable code, showing readers how to solve problems in practice; (iv) allow for rapid updates, both by us and also by the community at large; and (v) be complemented by a forum for interactive discussion of technical details and for answering questions.
Quantile regression is a fundamental problem in statistical learning, motivated by the need to quantify uncertainty in predictions or to model a diverse population without being overly reductive. For instance, epidemiological forecasts, cost estimates, and revenue predictions all benefit from an accurate quantification of the range of possible values. As such, many models have been developed for this problem over many years of research in econometrics, statistics, and machine learning. Rather than proposing yet another (new) algorithm for quantile regression, we adopt a meta viewpoint: we investigate methods for aggregating any number of conditional quantile models, in order to improve accuracy and robustness. We consider weighted ensembles where the weights may vary not only across individual models, but also across quantile levels and feature values. All of the models we consider in this paper can be fit using modern deep learning toolkits, and are hence widely accessible (from an implementation point of view) and scalable. To improve the accuracy of the predicted quantiles (or equivalently, prediction intervals), we develop tools for ensuring that quantiles remain monotonically ordered, and apply conformal calibration methods. These can be used without any modification of the original library of base models. We also review some basic theory surrounding quantile aggregation and related scoring rules, and contribute a few new results to this literature (for example, the fact that post sorting or post isotonic regression can only improve the weighted interval score). Finally, we provide an extensive suite of empirical comparisons across 34 datasets from two different benchmark repositories.
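A small numpy sketch of two ingredients mentioned above: a weighted ensemble of several models' conditional quantile predictions, followed by post-sorting so the aggregated quantiles are monotonically ordered. For simplicity the weights here are per-model scalars, whereas the paper also lets them vary with quantile level and features; shapes and values are illustrative.

```python
import numpy as np

def aggregate_quantiles(preds, weights):
    """preds: (n_models, n_samples, n_quantiles); weights: (n_models,) summing to 1."""
    weights = np.asarray(weights).reshape(-1, 1, 1)
    combined = (weights * preds).sum(axis=0)   # weighted ensemble of quantile curves
    return np.sort(combined, axis=-1)          # post-sorting: enforce monotone quantiles

preds = np.stack([
    np.array([[1.0, 0.8, 2.5]]),   # model 1: quantile levels 0.1, 0.5, 0.9 (crossed)
    np.array([[0.5, 1.2, 2.0]]),   # model 2
])
print(aggregate_quantiles(preds, weights=[0.6, 0.4]))  # non-crossing aggregated quantiles
```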
Relying on too many trials to learn good actions, current reinforcement learning (RL) algorithms have limited applicability in real-world settings, where exploration can be too costly. We propose a batch RL algorithm in which an effective policy is learned using only a fixed offline dataset, rather than online interactions with the environment. The limited data in batch RL produces inherent uncertainty in the value estimates of states and actions that are insufficiently represented in the training data. This leads to particularly severe extrapolation errors when our candidate policies diverge from the policy that generated the data. We propose to mitigate this issue via two straightforward penalties: a policy constraint that reduces this divergence and a value constraint that discourages overly optimistic estimates. Over a comprehensive set of 32 continuous-action batch RL benchmarks, our approach compares favorably with state-of-the-art methods, regardless of how the offline data were collected.
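A generic PyTorch-style sketch of the two penalty ideas described above, not the paper's exact formulation: an actor objective with a divergence penalty toward the behavior (data-generating) policy, and a critic target shrunk by an uncertainty estimate to discourage over-optimism. All names and coefficients are assumptions.

```python
import torch

def actor_loss(q_value, policy_dist, behavior_dist, lambda_pi=1.0):
    # Maximize Q while penalizing divergence from the behavior policy
    # (policy constraint); both arguments are torch.distributions objects.
    divergence = torch.distributions.kl_divergence(policy_dist, behavior_dist).mean()
    return -q_value.mean() + lambda_pi * divergence

def critic_target(reward, next_q, uncertainty, gamma=0.99, lambda_v=1.0):
    # Value constraint: subtract an uncertainty estimate (e.g. the std across
    # an ensemble of Q-networks) from the bootstrapped value.
    return reward + gamma * (next_q - lambda_v * uncertainty)
```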
Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.
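A minimal PyTorch-style sketch of the two components this abstract describes: the margin-based pairwise loss and, in simplified form, sampling negatives with probability inversely proportional to the density of pairwise distances on the unit hypersphere. The hyperparameter values and the omission of the clipping used in practice are my own simplifications.

```python
import torch

def margin_loss(dist, is_positive, alpha=0.2, beta=1.2):
    # y = +1 for positive pairs, -1 for negative pairs; penalize positives
    # farther than beta - alpha and negatives closer than beta + alpha.
    y = is_positive.float() * 2 - 1
    return torch.clamp(alpha + y * (dist - beta), min=0.0).mean()

def distance_weighted_negative(dist_to_anchor, embed_dim):
    # Weight each candidate negative inversely to the approximate density of
    # distances between points on a unit hypersphere, then sample one index.
    d = dist_to_anchor.clamp(min=1e-4)
    log_q = (embed_dim - 2) * d.log() \
            + (embed_dim - 3) / 2 * (1 - d.pow(2) / 4).clamp(min=1e-8).log()
    weights = torch.softmax(-log_q, dim=0)   # proportional to 1 / q(d)
    return torch.multinomial(weights, 1)
```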
We study the problem of designing models for machine learning tasks defined on sets. In contrast to the traditional approach of operating on fixed-dimensional vectors, we consider objective functions defined on sets that are invariant to permutations. Such problems are widespread, ranging from estimation of population statistics [1], to anomaly detection in piezometer data of embankment dams [2], to cosmology [3,4]. Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong. This family of functions has a special structure which enables us to design a deep network architecture that can operate on sets and which can be deployed on a variety of scenarios including both unsupervised and supervised learning tasks. We also derive the necessary and sufficient conditions for permutation equivariance in deep models. We demonstrate the applicability of our method on population statistic estimation, point cloud classification, set expansion, and outlier detection.
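A compact PyTorch sketch of the sum-decomposition structure the main theorem points to: each set element is embedded by a shared network phi, the embeddings are pooled with a permutation-invariant sum, and rho maps the pooled vector to the output. Layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class DeepSet(nn.Module):
    def __init__(self, in_dim, hidden=64, out_dim=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, x):                 # x: (batch, set_size, in_dim)
        pooled = self.phi(x).sum(dim=1)   # permutation-invariant pooling
        return self.rho(pooled)

out = DeepSet(in_dim=3)(torch.randn(8, 10, 3))  # same output for any element ordering
```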
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
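A minimal PyTorch sketch of one of the compared techniques (Monte-Carlo Dropout): dropout is kept active at test time, the softmax output is averaged over several stochastic forward passes, and predictive entropy is used to reject the most uncertain tiles. The model, the number of samples, and the rejection threshold are assumptions.

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    model.train()   # keep dropout layers stochastic at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    # Predictive entropy as the uncertainty score per tile.
    entropy = -(mean_probs * mean_probs.clamp(min=1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Example rejection rule: drop the 10% most uncertain tiles before scoring.
# keep = entropy < torch.quantile(entropy, 0.9)
```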
Charisma is considered to be one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence (AI) perspective in providing it with such a skill. Beyond that, a plethora of use cases opens up for computational measurement of human charisma, such as tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Beyond that, automatic measurement also appears quite feasible with the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and its behavioural cues. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor is the different skeleton formats provided by different datasets, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
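A compact PyTorch sketch of the affine-combining idea described above: latent 3D points are formed as combinations (with weights summing to 1) of the input joints, and joints are reconstructed as combinations of the latent points. Parameterizing the sum-to-one constraint with a softmax is my own simplifying assumption, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class AffineCombiningAutoencoder(nn.Module):
    def __init__(self, n_joints, n_latent):
        super().__init__()
        self.enc_logits = nn.Parameter(torch.zeros(n_latent, n_joints))
        self.dec_logits = nn.Parameter(torch.zeros(n_joints, n_latent))

    def forward(self, joints):                          # joints: (batch, n_joints, 3)
        enc_w = torch.softmax(self.enc_logits, dim=-1)  # each row sums to 1
        dec_w = torch.softmax(self.dec_logits, dim=-1)
        latent = enc_w @ joints                         # (batch, n_latent, 3) latent points
        return dec_w @ latent, latent                   # reconstructed joints, latents

model = AffineCombiningAutoencoder(n_joints=24, n_latent=16)
recon, latent = model(torch.randn(2, 24, 3))
```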